54 research outputs found

    Using diffusion MRI to discriminate areas of cortical grey matter

    Cortical area parcellation is a challenging problem that is often approached by combining structural imaging (e.g., quantitative T1, diffusion-based connectivity) with functional imaging (e.g., task activations, topological mapping, resting state correlations). Diffusion MRI (dMRI) has been widely adopted to analyse white matter microstructure, but scarcely used to distinguish grey matter regions because of the reduced anisotropy there. Nevertheless, differences in the texture of the cortical 'fabric' have long been mapped by histologists to distinguish cortical areas. Reliable area-specific contrast in the dMRI signal has previously been demonstrated in selected occipital and sensorimotor areas. We expand upon these findings by testing several diffusion-based feature sets in a series of classification tasks. Using Human Connectome Project (HCP) 3T datasets and a supervised learning approach, we demonstrate that diffusion MRI is sensitive to architectonic differences between a large number of different cortical areas defined in the HCP parcellation. By employing a surface-based cortical imaging pipeline, which defines diffusion features relative to local cortical surface orientation, we show that we can differentiate areas from their neighbours with higher accuracy than when using only fractional anisotropy or mean diffusivity. The results suggest that grey matter diffusion may provide a new, independent source of information for dividing up the cortex.
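The classification setup described in the abstract can be illustrated with a minimal sketch. Everything below is invented for illustration: the three synthetic feature values stand in for surface-relative diffusion features such as FA and MD, and a nearest-centroid classifier stands in for the supervised learner actually used in the study.

```python
import numpy as np

# Hypothetical illustration with invented numbers: classify vertices of two
# cortical areas from synthetic diffusion feature vectors.
rng = np.random.default_rng(0)

def make_area(mean, n=200):
    """Synthetic per-vertex feature vectors for one cortical area."""
    return rng.normal(loc=mean, scale=0.05, size=(n, len(mean)))

# Two areas with slightly different architectonic "fingerprints".
area_a = make_area([0.15, 0.80, 0.40])
area_b = make_area([0.25, 0.70, 0.55])

X = np.vstack([area_a, area_b])
y = np.array([0] * 200 + [1] * 200)

# Train/test split.
idx = rng.permutation(len(X))
train, test = idx[:300], idx[300:]

# Nearest-centroid classifier as a stand-in for the supervised learner.
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
print(f"test accuracy: {accuracy:.2f}")
```

The point of the sketch is only the pipeline shape: per-vertex diffusion features in, area labels out, with accuracy above chance indicating area-specific contrast in the features.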

    Variational Registration of Multiple Images with the SVD based SqN Distance Measure

    Image registration, especially the quantification of image similarity, is an important task in image processing. Various approaches for the comparison of two images are discussed in the literature. However, although most of these approaches perform very well in a two-image scenario, an extension to a multiple-image scenario deserves attention. In this article, we discuss and compare registration methods for multiple images. Our key assumption is that information about the singular values of a feature matrix of images can be used for alignment. We introduce, discuss and relate three recent approaches from the literature: the Schatten q-norm based SqN distance measure, a rank based approach, and a feature volume based approach. We also present results for typical applications such as dynamic image sequences or stacks of histological sections. Our results indicate that the SqN approach is in fact a suitable distance measure for image registration. Moreover, our examples also indicate that the results obtained by SqN are superior to those obtained by its competitors. Comment: 12 pages, 5 figures, accepted at the conference "Scale Space and Variational Methods" in Hofgeismar, Germany 201
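The core quantity can be sketched in a few lines. This is a hedged illustration only: the actual SqN measure in the paper includes normalization and sits inside a variational registration framework, neither of which is reproduced here; flattened raw images are used as a simple choice of feature matrix.

```python
import numpy as np

def schatten_qnorm(F, q):
    """Schatten q-norm of F: the l_q norm of its singular values."""
    s = np.linalg.svd(F, compute_uv=False)
    return (s ** q).sum() ** (1.0 / q)

def feature_matrix(images):
    """Stack flattened images as the columns of a feature matrix."""
    return np.column_stack([im.ravel() for im in images])

rng = np.random.default_rng(0)
base = rng.random((16, 16))

# Perfectly "aligned" stack: identical images give a rank-1 feature matrix,
# so the nuclear norm (q=1) and the Frobenius norm (q=2) coincide.
aligned = feature_matrix([base, base, base])

# "Misaligned" stack: independent images spread the singular values out,
# so the q=1 / q=2 ratio grows -- a quantity an SqN-based registration
# scheme can drive down by bringing the images into alignment.
misaligned = feature_matrix([rng.random((16, 16)) for _ in range(3)])

print(schatten_qnorm(aligned, 1) / schatten_qnorm(aligned, 2))        # ~ 1.0
print(schatten_qnorm(misaligned, 1) / schatten_qnorm(misaligned, 2))  # > 1.0
```

The q = 2 case reduces exactly to the Frobenius norm, which gives a cheap sanity check on the implementation.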

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses instrument effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
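The recommended calibration can be sketched as follows. This is a simplified, hypothetical illustration with invented particle counts and instrument response; the actual protocol, microsphere concentrations, linear-range assessment, and quality-control steps are detailed in the study.

```python
import numpy as np

# Hypothetical serial dilution of silica microspheres: each step halves the
# particle count. All numbers below are invented for illustration.
n_steps = 8
counts = 3.0e8 / (2.0 ** np.arange(n_steps))      # particles per well

rng = np.random.default_rng(0)
true_od_per_particle = 2.0e-9                     # assumed instrument response
od = counts * true_od_per_particle * rng.normal(1.0, 0.02, n_steps)

# Within the instrument's effective linear range, blank-subtracted OD is
# proportional to particle count, so a one-parameter least-squares fit
# through the origin gives the calibration factor.
od_per_particle = (od @ counts) / (counts @ counts)

def od_to_count(sample_od):
    """Convert a blank-subtracted OD reading into an estimated count."""
    return sample_od / od_per_particle

print(f"estimated count at OD 0.25: {od_to_count(0.25):.3g}")
```

Once the factor is known, any blank-subtracted OD reading within the linear range converts directly to an estimated particle (or cell) count, which is what makes readings comparable across instruments.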

    Grassmann-Cayley algebra for modeling systems of cameras and the algebraic equations of the manifold of trifocal tensors

    We show how to use the Grassmann-Cayley algebra to model systems of one, two and three cameras. We start with a brief introduction of the Grassmann-Cayley or double algebra and proceed to demonstrate its use for modeling systems of cameras. In the case of three cameras, we give a new interpretation of the trifocal tensors and study in detail some of the constraints that they satisfy. In particular, we prove that simple subsets of those constraints characterize the trifocal tensors; in other words, we give the algebraic equations of the manifold of trifocal tensors.

    A Nonlinear Method for Estimating the Projective Geometry of Three Views

    Given three partially overlapping views of a scene from which a set of point correspondences has been extracted, the goal is to recover the trifocal tensors between the three views. We give a new way of deriving the trifocal tensor based on Grassmann-Cayley algebra that sheds new light on its structure. We show that our derivation leads to a complete characterization of its geometric and algebraic properties which is fairly intuitive, i.e. geometric. We give a set of algebraic constraints which are satisfied by the 27 coefficients of the trifocal tensor and allow it to be parameterized minimally with 18 coefficients. We then describe a robust method for estimating the trifocal tensor from point and line correspondences that uses this minimal parameterization. Our experimental results show that this method is superior to the linear methods previously published.
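A concrete way to see the trifocal tensor's structure is the standard camera-matrix construction (a sketch, not the Grassmann-Cayley derivation of the paper): with the first camera in canonical form, each of the three 3x3 slices is built from columns of the other two camera matrices, and the 27 coefficients satisfy the point-line-line incidence relation across the views.

```python
import numpy as np

rng = np.random.default_rng(1)

# Canonical first camera P1 = [I | 0]; generic second and third cameras.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = rng.normal(size=(3, 4))
P3 = rng.normal(size=(3, 4))

# Trifocal tensor slices T_i = a_i b_4^T - a_4 b_i^T, where a_j and b_j are
# the columns of P2 and P3 (these are the 27 coefficients).
T = np.stack([np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
              for i in range(3)])

# Point-line-line incidence: for a point x1 in view 1 and lines l2, l3 through
# the corresponding points in views 2 and 3, l2^T (sum_i x1_i T_i) l3 = 0.
X = np.array([0.3, -1.2, 2.0, 1.0])               # a 3D point (homogeneous)
x1, x2, x3 = P1 @ X, P2 @ X, P3 @ X
l2 = np.cross(x2, rng.normal(size=3))             # a line through x2
l3 = np.cross(x3, rng.normal(size=3))             # a line through x3

residual = l2 @ (x1[0] * T[0] + x1[1] * T[1] + x1[2] * T[2]) @ l3
print(f"incidence residual: {residual:.2e}")      # ~ 0 for a valid tensor
```

The incidence residual is exactly the kind of algebraic quantity a nonlinear estimation method minimizes over a minimally parameterized tensor, instead of solving a linear system over all 27 unconstrained coefficients.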

    Embedding neurophysiological signals

    Neurophysiological time-series recordings of brain activity like the electroencephalogram (EEG) or local field potentials can be decoded by machine learning models in order to either control an application, e.g., for communication or rehabilitation after stroke, or to passively monitor the ongoing brain state of the subject, e.g., in a demanding work environment. A typical decoding challenge faced by a brain-computer interface (BCI) is the small dataset size compared to other domains of machine learning like computer vision or natural language processing. The possibilities to tackle classification or regression problems in BCI are either to train a regular model on the available small training data sets or to use transfer learning, which utilizes data from other sessions, subjects, or even datasets to train a model. Transfer learning is non-trivial because of the non-stationarity of EEG signals between subjects, but also within subjects. This variability calls for explicit calibration phases at the start of every session, before BCI applications can be used online. In this study, we present arguments to BCI researchers to encourage the use of embeddings for EEG decoding. In particular, we introduce a simple domain adaptation technique involving both deep learning (when learning the embeddings from the source data) and classical machine learning (for fast calibration on the target data). This technique allows us to learn embeddings across subjects, which deliver a generalized data representation. These can then be fed into subject-specific classifiers in order to minimize their need for calibration data. We conducted offline experiments on the 14 subjects of the High Gamma EEG-BCI Dataset [1]. Embedding functions were obtained by training EEGNet [2] using a leave-one-subject-out (LOSO) protocol, and the embedding vectors were classified by the logistic regression algorithm.
Our pipeline was compared to two baseline approaches: EEGNet without subject-specific calibration and the standard FBCSP pipeline in a within-subject training. We observed that the representations learned by the embedding functions were indeed non-stationary across subjects, justifying the need for an additional subject-specific calibration. We also observed that the subject-specific calibration indeed improved the score. Finally, our data suggest that building upon embeddings requires less individual calibration data than the FBCSP baseline to reach satisfactory scores.
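The two-stage pipeline described above can be sketched on toy data. Everything here is a stand-in: a frozen random linear projection replaces the EEGNet-derived embedding function, a minimal gradient-descent logistic regression replaces the classifier used in the study, and the "trials" are synthetic Gaussian data, not EEG.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (stand-in): a frozen "embedding function" learned on other subjects.
# Here it is just a fixed random linear projection from a 640-dimensional
# trial representation down to 8 dimensions; in the study this role is played
# by an EEGNet trained leave-one-subject-out.
W_embed = rng.normal(size=(640, 8)) / np.sqrt(640)

def embed(trials):                                # trials: (n_trials, 640)
    return trials @ W_embed

# Stage 2: fast subject-specific calibration -- logistic regression on the
# embeddings, fitted by plain gradient descent on a few target-subject trials.
def fit_logreg(E, y, lr=0.5, steps=500):
    Eb = np.hstack([E, np.ones((len(E), 1))])     # add a bias column
    w = np.zeros(Eb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Eb @ w))
        w -= lr * Eb.T @ (p - y) / len(y)
    return w

def predict(E, w):
    Eb = np.hstack([E, np.ones((len(E), 1))])
    return (Eb @ w > 0).astype(int)

# Synthetic target-subject trials for two classes (invented data).
n = 100
X = np.vstack([rng.normal(0.0, 1.0, (n, 640)),
               rng.normal(1.0, 1.0, (n, 640))])
y = np.array([0] * n + [1] * n)
idx = rng.permutation(2 * n)
X, y = X[idx], y[idx]

w = fit_logreg(embed(X[:60]), y[:60])             # small calibration set
acc = (predict(embed(X[60:]), w) == y[60:]).mean()
print(f"calibrated accuracy: {acc:.2f}")
```

The design point the sketch illustrates: the expensive deep model is trained once across subjects and then frozen, so per-subject calibration reduces to fitting a small linear classifier in the low-dimensional embedding space, which needs far fewer trials than training a full pipeline from scratch.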